Batch Gradient Method for Training of Pi-Sigma Neural Network with Penalty
Authors
Abstract
Similar resources
Training Pi-Sigma Network by Online Gradient Algorithm with Penalty for Small Weight Update
A pi-sigma network is a class of feedforward neural networks with product units in the output layer. The online gradient algorithm is the simplest and most commonly used training method for feedforward neural networks. A problem arises, however, when the online gradient algorithm is applied to pi-sigma networks: the update increment of the weights may become very small, especially early in tra...
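As a rough illustration of the setup this abstract describes, here is a minimal sketch (Python/NumPy) of one online gradient step for a pi-sigma network with a quadratic penalty term. All names and hyperparameters (PiSigma, eta, lam) are illustrative assumptions, not code from the paper, and the quadratic penalty is a generic stand-in; the paper's penalty, aimed at keeping the update increments from shrinking, may take a different form.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    class PiSigma:
        # K summing units feed a single product unit whose weights are fixed,
        # so only the first-layer weights W are trainable.
        def __init__(self, n_inputs, order, seed=0):
            rng = np.random.default_rng(seed)
            self.W = rng.normal(scale=0.5, size=(order, n_inputs))

        def forward(self, x):
            h = self.W @ x                   # summing-unit outputs
            return sigmoid(np.prod(h)), h

        def online_step(self, x, t, eta=0.1, lam=1e-4):
            # One per-sample update of E = 0.5*(y - t)^2 + (lam/2)*||W||^2.
            y, h = self.forward(x)
            delta = (y - t) * y * (1.0 - y)  # dE/d(net) for a sigmoid output
            for j in range(self.W.shape[0]):
                # d(net)/d(w_j) = (product of the other h_i) * x
                prod_others = np.prod(np.delete(h, j))
                self.W[j] -= eta * (delta * prod_others * x + lam * self.W[j])
            return 0.5 * (y - t) ** 2

With lam = 0 this reduces to the plain online gradient method the abstract starts from.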
Convergence of Online Gradient Method for Pi-sigma Neural Networks with Inner-penalty Terms
This paper investigates an online gradient method with an inner penalty for a novel feedforward network called the pi-sigma network. This network uses product cells as its output units to indirectly incorporate the capabilities of higher-order networks while requiring fewer weights and processing units. Penalty term methods have been widely used to improve the generalization performan...
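For orientation, gradient methods with an inner penalty are usually derived from an error function of roughly the following shape (a generic quadratic inner penalty is assumed here; the paper's exact term may differ):

    E(\mathbf{w}) = \frac{1}{2} \sum_{m=1}^{M} \bigl( y_m - t_m \bigr)^2
                  + \lambda \sum_{j=1}^{K} \lVert \mathbf{w}_j \rVert^2,
    \qquad
    \mathbf{w}_j^{n+1} = \mathbf{w}_j^{n} - \eta \, \nabla_{\mathbf{w}_j} E(\mathbf{w}^{n}),

where w_j are the weights of the j-th summing unit, eta is the learning rate, and lambda is the penalty coefficient.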
A conjugate gradient based method for Decision Neural Network training
The Decision Neural Network is a new approach for solving multi-objective decision-making problems based on artificial neural networks. By using imprecise evaluation data, network training is improved and the number of required training data sets is reduced. The available training method is based on the gradient descent method (BP); one of its limitations is its convergence speed. Therefore,...
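The abstract contrasts plain gradient descent (BP) with a conjugate gradient approach. The sketch below shows the generic Fletcher-Reeves update that conjugate-gradient training methods are commonly built on; the function and parameter names are assumptions for illustration, not the paper's code.

    import numpy as np

    def cg_train(w, grad, n_steps=100, eta=0.05):
        # Fletcher-Reeves conjugate gradient: each search direction mixes
        # the new negative gradient with the previous direction, which
        # typically converges faster than plain gradient descent.
        g = grad(w)
        d = -g
        for _ in range(n_steps):
            w = w + eta * d              # a line search would normally pick eta
            g_new = grad(w)
            beta = (g_new @ g_new) / (g @ g + 1e-12)  # Fletcher-Reeves beta
            d = -g_new + beta * d
            g = g_new
        return w

For example, cg_train(np.ones(3), grad=lambda w: 2.0 * w) drives w toward the minimizer of ||w||^2.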
Boundedness of a Batch Gradient Method with Penalty for Feedforward Neural Networks
This paper considers a batch gradient method with a penalty term for training feedforward neural networks. The role of the penalty term is to control the magnitude of the weights and to improve the generalization performance of the network. A usual penalty is considered: a term proportional to the norm of the weights. The boundedness of the weights of the network is proved. The boundedness i...
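A minimal sketch of one batch gradient step with the usual weight-norm penalty (the helper grad_sample and all defaults are illustrative assumptions, not the paper's notation):

    import numpy as np

    def batch_step(w, X, T, grad_sample, eta=0.05, lam=1e-3):
        # Accumulate the error gradient over the whole batch, then add the
        # penalty gradient lam * w, i.e. the gradient of (lam/2)*||w||^2.
        # Because the penalty keeps pulling w toward the origin, the weight
        # norm stays bounded along the iteration, which is the kind of
        # boundedness property the abstract refers to.
        g = np.mean([grad_sample(w, x, t) for x, t in zip(X, T)], axis=0)
        return w - eta * (g + lam * w)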
Batch gradient method with smoothing L1/2 regularization for training of feedforward neural networks
The aim of this paper is to develop a novel method to prune feedforward neural networks by introducing an L1/2 regularization term into the error function. This procedure forces weights to become smaller during training so that they can eventually be removed after training. The usual L1/2 regularization term involves absolute values and is not differentiable at the origin, which typically causes os...
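As a sketch of the smoothing idea (the paper's exact smoothing function may differ), the snippet below replaces |w| near the origin with a quadratic that matches |w| and its derivative at +-a, which makes the L1/2 term differentiable everywhere:

    import numpy as np

    def smoothed_abs(w, a=0.1):
        # f(w) = w^2/(2a) + a/2 for |w| < a, else |w|; both f and f' are
        # continuous at w = +-a, so f is a C^1 smoothing of |w|.
        return np.where(np.abs(w) < a, w * w / (2.0 * a) + a / 2.0, np.abs(w))

    def smoothed_l_half(w, lam=1e-3, a=0.1):
        # Smoothed L1/2 regularizer lam * sum_i f(w_i)^(1/2); since
        # f >= a/2 > 0, the square root and its gradient are defined
        # everywhere, removing the non-differentiability at the origin.
        return lam * np.sum(smoothed_abs(w, a) ** 0.5)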
Journal
Journal title: International Journal of Artificial Intelligence & Applications
Year: 2016
ISSN: 0976-2191, 0975-900X
DOI: 10.5121/ijaia.2016.7102